Feature/nl parser #351
📝 Walkthrough

Adds NL-install: intent extraction (OpenAI/Ollama), stdin-aware prompts, interactive preview/edit/execute flows, fake-provider short-circuits in sandbox/docker sandboxes, docs for NLParser/stdin, and deterministic tests for NLParser and stdin handling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as cortex/cli.py
    participant Interpreter as cortex/llm/interpreter.py
    participant Sandbox as cortex/sandbox/sandbox_executor.py
    User->>CLI: cortex install <nl-request> [--execute] (stdin?)
    CLI->>CLI: _is_interactive() & fetch API key
    CLI->>CLI: _build_prompt_with_stdin(if piped)
    CLI->>Interpreter: extract_intent(user_prompt)
    Interpreter->>Interpreter: _extract_intent_openai/_extract_intent_ollama
    Interpreter-->>CLI: intent {action,domain,install_mode,...}
    CLI->>Interpreter: parse(prompt) → commands
    CLI-->>User: show preview (commands)
    alt execute=false
        CLI-->>User: end (preview)
    else execute=true
        alt interactive
            CLI-->>User: options (proceed / edit / cancel)
            User->>CLI: edits or confirms
        else non-interactive
            CLI->>CLI: auto-approve
        end
        CLI->>Sandbox: execute(commands, provider)
        alt provider == "fake"
            Sandbox-->>CLI: ExecutionResult (fake success)
        else
            Sandbox->>Sandbox: real execution
            Sandbox-->>CLI: ExecutionResult
        end
        CLI-->>User: display result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches
Actionable comments posted: 7
🧹 Nitpick comments (8)
tests/test_stdin_support.py (1)
1-2: Remove unused import `io`.

The `io` module is imported but never used in the test file.

🔎 Proposed fix

```diff
-import io
 import sys

 from cortex.cli import CortexCLI
```

tests/test_nl_parser.py (1)
26-46: Consider testing the actual implementation rather than duplicating logic.

These tests duplicate the normalization logic inline (e.g., `action.split("|")[0].strip()`, `if domain != "unknown": ambiguous = False`) rather than importing and testing the actual implementation from `cortex/cli.py`. This risks tests passing while the real code diverges.

Consider extracting the normalization logic into a testable helper function in the source code and importing it here.
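A minimal sketch of such a helper, assuming the normalization rules quoted above (the function name and the idea of a known-domain set are hypothetical; the real logic currently lives inline in `cortex/cli.py`):

```python
# Hypothetical helper: mirrors the inline normalization the review quotes,
# extracted so tests can import and exercise the same code path.
def normalize_intent(intent: dict) -> dict:
    """Normalize unstable LLM intent output without mutating the input."""
    normalized = dict(intent)
    # Models sometimes return compound actions like "install|setup";
    # keep only the first token as the primary action.
    action = str(normalized.get("action", "unknown"))
    normalized["action"] = action.split("|")[0].strip()
    # A recognized (non-"unknown") domain overrides the ambiguity flag.
    if normalized.get("domain", "unknown") != "unknown":
        normalized["ambiguous"] = False
    return normalized
```

Tests could then call `normalize_intent(...)` directly instead of re-implementing the split logic inline.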
docs/nl_parser.md (1)
42-42: Minor formatting: add a space before the parenthesis.

🔎 Proposed fix

```diff
-- Interactive confirmation to execute the commands(`yes / edit / no`)
+- Interactive confirmation to execute the commands (`yes / edit / no`)
```

cortex/cli.py (2)
32-40: Consider handling encoding errors in stdin reading.

The `read_stdin()` function reads stdin directly without specifying encoding handling. If the piped content contains non-UTF-8 bytes, this could raise an exception.

🔎 Proposed fix

```diff
 def read_stdin():
     """
     Read piped stdin safely (if present).
     """
     if not sys.stdin.isatty():
-        data = sys.stdin.read()
+        try:
+            data = sys.stdin.read()
+        except UnicodeDecodeError:
+            return None
         data = data.strip()
         return data if data else None
     return None
```
129-134: Add type hints to the `notify` method.

Per coding guidelines, type hints are required for Python code. The `notify` method is missing type annotations.

🔎 Proposed fix

```diff
-    def notify(self, args):
+    def notify(self, args: argparse.Namespace) -> int:
         """Handle notification commands"""
```

cortex/llm/interpreter.py (3)
330-333: Consider expanding the heuristic command patterns.

The fallback heuristic only captures commands starting with `sudo`, `apt`, or `apt-get`. This may miss valid install commands like `pip install`, `dnf install`, `yum install`, or `docker run`. Consider expanding the pattern list or making it configurable.

🔎 Proposed fix

```diff
         # crude but safe: common install commands
-        if line.startswith(("sudo ", "apt ", "apt-get ")):
+        if line.startswith(("sudo ", "apt ", "apt-get ", "pip ", "pip3 ", "dnf ", "yum ", "docker ")):
             commands.append(line)
```
463-474: Add a docstring to the public `extract_intent` method.

Per coding guidelines, docstrings are required for all public APIs. This public method lacks documentation describing its purpose, parameters, return value, and potential exceptions.

🔎 Proposed fix

```diff
     def extract_intent(self, user_input: str) -> dict:
+        """Extract user intent from natural language input.
+
+        Args:
+            user_input: Natural language description of desired action
+
+        Returns:
+            Dict containing action, domain, install_mode, description,
+            ambiguous flag, and confidence score
+
+        Raises:
+            ValueError: If input is empty
+            NotImplementedError: If provider is Claude (not yet supported)
+        """
         if not user_input or not user_input.strip():
             raise ValueError("User input cannot be empty")
```
429-461: Remove `_estimate_confidence` or integrate it into the LLM extraction logic.

This method is defined but never called anywhere in the codebase. Since the LLM already returns a confidence score in the intent response, either remove this unused method or implement it as a fallback/validation mechanism if needed.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- cortex/cli.py
- cortex/llm/interpreter.py
- docs/nl_parser.md
- docs/stdin.md
- tests/test_nl_parser.py
- tests/test_stdin_support.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_stdin_support.py
- tests/test_nl_parser.py
- cortex/llm/interpreter.py
- cortex/cli.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_stdin_support.py
- tests/test_nl_parser.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (2)

tests/test_stdin_support.py (1)
cortex/cli.py (1)
- `_build_prompt_with_stdin` (52-63)

cortex/cli.py (2)
cortex/llm/interpreter.py (2)
- `extract_intent` (463-474)
- `parse` (358-417)

cortex/packages.py (1)
- `parse` (383-427)
🪛 GitHub Actions: CI
tests/test_nl_parser.py
[error] 19-19: Ruff I001: Import block is un-sorted or un-formatted. Command: 'ruff check . --output-format=github'.
🪛 GitHub Check: lint
tests/test_nl_parser.py
[failure] 19-19: Ruff (I001)
tests/test_nl_parser.py:19:1: I001 Import block is un-sorted or un-formatted
🪛 GitHub Check: Lint
tests/test_nl_parser.py
[failure] 19-19: Ruff (I001)
tests/test_nl_parser.py:19:1: I001 Import block is un-sorted or un-formatted
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (5)
tests/test_stdin_support.py (1)
7-21: LGTM!

Tests effectively cover both code paths of `_build_prompt_with_stdin`: with and without stdin data. The assertions validate the expected structure of the combined prompt.
1-47: LGTM!

Clear and comprehensive documentation that covers requirements, implementation details, and the human-in-the-loop workflow. This aligns well with the PR objectives and linked issue requirements.
cortex/cli.py (2)
52-63: LGTM!

Clean implementation that safely combines stdin context with user prompts. The `getattr` defensive check is appropriate.
308-350: LGTM!

The confirmation flow properly implements explicit user consent before execution, with options to proceed, edit, or cancel. This aligns with the requirement for no silent sudo execution and dry-run by default. Based on learnings, this is the expected behavior.
cortex/llm/interpreter.py (1)
106-146: LGTM!

The `_extract_intent_ollama` method has proper error handling with a structured fallback response when intent extraction fails. Good defensive coding.
Force-pushed from 2fc0318 to 51bd93d
Actionable comments posted: 1
♻️ Duplicate comments (4)
cortex/cli.py (2)
355-357: Remove the duplicate `extract_intent` call.

`extract_intent(software)` is called twice consecutively. This wastes an LLM API call, increases latency, and incurs unnecessary cost.

🔎 Proposed fix

```diff
         # -------- Intent understanding (NEW) --------
         intent = interpreter.extract_intent(software)
-        intent = interpreter.extract_intent(software)

         # ---------- Extract install mode from intent ----------
```
361-365: Add defensive handling for `confidence` parsing.

`float(intent.get("confidence", 0.0))` can raise `ValueError` if the LLM returns a non-numeric string (e.g., `"high"` or `"0.8 approx"`). Consider wrapping it in try/except.

🔎 Proposed fix

```diff
         action = intent.get("action", "unknown")
         domain = intent.get("domain", "unknown")
-        confidence = float(intent.get("confidence", 0.0))
+        try:
+            confidence = float(intent.get("confidence", 0.0))
+        except (ValueError, TypeError):
+            confidence = 0.0
         ambiguous = bool(intent.get("ambiguous", False))
```

cortex/llm/interpreter.py (2)
183-191: JSON syntax error in the intent prompt template.

Line 187 has a syntax error in the JSON example: `"install_mode" "..."` is missing the colon between key and value. This malformed example could confuse the LLM and produce invalid JSON responses.

🔎 Proposed fix

```diff
 Format:
 {
   "action": "...",
   "domain": "...",
-  "install_mode" "..."
+  "install_mode": "...",
   "description": "...",
   "ambiguous": true/false,
   "confidence": 0.0
 }
```
211-223: Add error handling for the API call and JSON parsing.

Unlike `_extract_intent_ollama`, this method lacks try/except handling. Both the API call and `json.loads` can raise exceptions that will propagate unhandled. Consider wrapping in try/except and falling back to a structured unknown intent (consistent with `_extract_intent_ollama`).

🔎 Proposed fix

```diff
     def _extract_intent_openai(self, user_input: str) -> dict:
-        response = self.client.chat.completions.create(
-            model=self.model,
-            messages=[
-                {"role": "system", "content": self._get_intent_prompt()},
-                {"role": "user", "content": user_input},
-            ],
-            temperature=0.2,
-            max_tokens=300,
-        )
-
-        content = response.choices[0].message.content.strip()
-        return json.loads(content)
+        try:
+            response = self.client.chat.completions.create(
+                model=self.model,
+                messages=[
+                    {"role": "system", "content": self._get_intent_prompt()},
+                    {"role": "user", "content": user_input},
+                ],
+                temperature=0.2,
+                max_tokens=300,
+            )
+            content = response.choices[0].message.content.strip()
+            return self._parse_intent_from_text(content)
+        except Exception:
+            return {
+                "action": "unknown",
+                "domain": "unknown",
+                "install_mode": "system",
+                "description": "Failed to extract intent from OpenAI",
+                "ambiguous": True,
+                "confidence": 0.0,
+            }
```
🧹 Nitpick comments (4)
tests/test_stdin_support.py (3)
1-2: Remove unused imports.

The `io` and `sys` imports are not used in this test file.

🔎 Proposed fix

```diff
-import io
-import sys
-
 from cortex.cli import CortexCLI
```
7-10: Add type hints to test functions.

As per coding guidelines, type hints are required in Python code. Consider adding return type annotations to test functions.

🔎 Proposed fix

```diff
-def test_build_prompt_without_stdin():
+def test_build_prompt_without_stdin() -> None:
     cli = CortexCLI()
     prompt = cli._build_prompt_with_stdin("install docker")
     assert prompt == "install docker"
```
13-21: Add type hints to test functions.

As per coding guidelines, type hints are required in Python code. Consider adding return type annotations to test functions.

🔎 Proposed fix

```diff
-def test_build_prompt_with_stdin():
+def test_build_prompt_with_stdin() -> None:
     cli = CortexCLI()
     cli.stdin_data = "some context from stdin"
     prompt = cli._build_prompt_with_stdin("install docker")
```

cortex/cli.py (1)
145-149: Consider adding a public save method to `NotificationManager`.

Calling the private `_save_config()` method from external code violates encapsulation. While the comment acknowledges this, it would be better to add a public `save()` or `save_config()` method to `NotificationManager`.

Note: This same pattern appears on lines 154 and 173.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- cortex/cli.py
- cortex/llm/interpreter.py
- docs/nl_parser.md
- docs/stdin.md
- tests/test_nl_parser.py
- tests/test_stdin_support.py
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/stdin.md
- tests/test_nl_parser.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_stdin_support.py
- cortex/cli.py
- cortex/llm/interpreter.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_stdin_support.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (2)

tests/test_stdin_support.py (1)
cortex/cli.py (2)
- `CortexCLI` (33-859)
- `_build_prompt_with_stdin` (41-52)

cortex/cli.py (2)
cortex/llm/interpreter.py (2)
- `extract_intent` (486-497)
- `parse` (379-440)

cortex/packages.py (1)
- `parse` (383-427)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (13)
cortex/cli.py (7)
41-52: LGTM!

The `_build_prompt_with_stdin` method correctly handles optional stdin context. The defensive use of `getattr` ensures the method doesn't fail if `stdin_data` is not set.
121-127: LGTM!

The early validation for a missing subcommand prevents cascading errors and provides clear user feedback.
164-169: LGTM!

The time format validation using `datetime.strptime` correctly ensures a valid HH:MM format before saving the DND window configuration.
367-374: LGTM!

The intent normalization logic correctly handles unstable model output by:
- Splitting on pipe characters to extract the primary action
- Setting `ambiguous=False` when a known domain is detected

This aligns with the PR objective of stable behavior under LLM nondeterminism.
376-393: LGTM!

The explicit intent display and early bailout for ambiguous/low-confidence requests provide excellent transparency and align with the "show reasoning" requirement from the issue.
403-414: LGTM!

The install-mode-specific prompt generation correctly handles Python vs system installs, ensuring appropriate guidance for the LLM. The integration with `_build_prompt_with_stdin` enables context-aware command generation.
438-480: LGTM!

The interactive confirmation flow provides excellent user control with:
- Clear yes/edit/no options
- Line-by-line command editing capability
- Final confirmation after edits
- Safe cancellation at multiple points
This satisfies the "explicit user confirmation" and "edit or cancel planned commands" requirements. Based on learnings, this ensures no silent sudo execution.
cortex/llm/interpreter.py (6)
97-110: LGTM!

The reformatted system prompt with explicit Rules, Format, and Example sections provides clear guidance to the LLM for generating safe, validated bash commands.
112-152: LGTM!

The `_extract_intent_ollama` method correctly:
- Constructs the intent extraction prompt
- Makes the HTTP request to the local Ollama instance
- Parses the response using `_parse_intent_from_text`
- Returns a safe unknown-intent fallback on failure

The error handling ensures graceful degradation when Ollama fails.
225-253: LGTM!

The `_parse_intent_from_text` method provides robust parsing of LLM output with:
- JSON extraction from loosely structured text
- Minimal structural validation for required fields
- Safe unknown-intent fallback on parsing failure
This defensive approach aligns with the PR goal of stable behavior under LLM nondeterminism.
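The pattern the review describes can be sketched roughly like this (a simplified stand-in under assumed field names, not the actual `_parse_intent_from_text` implementation):

```python
import json

# Structured fallback returned whenever parsing fails (field names assumed
# from the intent schema discussed elsewhere in this review).
FALLBACK_INTENT = {
    "action": "unknown",
    "domain": "unknown",
    "install_mode": "system",
    "description": "Failed to parse intent",
    "ambiguous": True,
    "confidence": 0.0,
}


def parse_intent_from_text(text: str) -> dict:
    """Pull the first JSON object out of loosely structured LLM output."""
    # Isolate the outermost {...} span, ignoring surrounding prose.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        return dict(FALLBACK_INTENT)
    try:
        data = json.loads(text[start : end + 1])
    except json.JSONDecodeError:
        return dict(FALLBACK_INTENT)
    # Minimal structural validation: required keys must be present.
    if not all(k in data for k in ("action", "domain", "confidence")):
        return dict(FALLBACK_INTENT)
    return data
```

Surrounding prose, markdown fences, or partial JSON all degrade to the same safe fallback rather than raising.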
317-358: LGTM!

The enhanced `_parse_commands` method provides robust command extraction with:
- Code fence removal
- JSON blob isolation
- Strict JSON parsing with fallback
- Heuristic extraction for Ollama's loose output format
- Clear error when no valid commands are found
This multi-layered approach ensures compatibility across different LLM providers.
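A simplified illustration of that layered strategy (hypothetical code, not the real `_parse_commands`; the prefix list and layering order are assumptions):

```python
import json


def parse_commands(raw: str) -> list[str]:
    """Extract shell commands from LLM output: strict JSON first, heuristic second."""
    # Layer 1: strip Markdown code fences if present.
    lines = [ln for ln in raw.splitlines() if not ln.strip().startswith("```")]
    text = "\n".join(lines)

    # Layer 2: try to isolate and parse a JSON array of command strings.
    start, end = text.find("["), text.rfind("]")
    if start != -1 and end > start:
        try:
            data = json.loads(text[start : end + 1])
            if isinstance(data, list) and all(isinstance(c, str) for c in data):
                return data
        except json.JSONDecodeError:
            pass

    # Layer 3: heuristic fallback for loose prose output (e.g., from Ollama).
    prefixes = ("sudo ", "apt ", "apt-get ", "pip ", "docker ")
    return [ln.strip() for ln in text.splitlines() if ln.strip().startswith(prefixes)]
```

Strict parsing handles well-behaved providers; the heuristic layer keeps looser output usable instead of failing outright.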
452-484: LGTM!

The `_estimate_confidence` method provides a reasonable heuristic for confidence scoring based on linguistic signals. The clamping to [0.0, 1.0] with a minimum of 0.2 ensures sensible defaults when the LLM doesn't provide confidence scores.
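The clamping behavior can be illustrated with a hypothetical reduction of such a heuristic (the specific signal words and weights here are invented for illustration; only the [0.2, 1.0] clamp mirrors what the review describes):

```python
def estimate_confidence(user_input: str) -> float:
    """Heuristic confidence score, clamped to the range [0.2, 1.0]."""
    score = 0.2  # floor when no signals are present
    text = user_input.lower()
    # Illustrative linguistic signals (the real method's signals may differ).
    if any(w in text for w in ("install", "setup", "configure")):
        score += 0.4
    if any(w in text for w in ("docker", "postgres", "nginx", "python")):
        score += 0.3
    if len(text.split()) >= 4:
        score += 0.2
    return max(0.2, min(1.0, score))
```

The clamp guarantees a non-zero default even for empty signal matches, while capping over-accumulated scores at 1.0.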
486-497: LGTM!

The public `extract_intent` API provides clean provider routing with:
- Input validation
- Provider-specific delegation
- Clear error messages for unsupported providers
- Explicit NotImplementedError for Claude (work in progress)
This design allows for easy extension to additional providers.
Force-pushed from 2f2320c to af76e16
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/cli.py (1)
121-121: Add a return type hint.

The `notify` method is missing a return type hint. As per coding guidelines, type hints are required for all public APIs in Python code.

🔎 Proposed fix

```diff
-    def notify(self, args):
+    def notify(self, args) -> int:
         """Handle notification commands"""
```
🧹 Nitpick comments (1)
cortex/cli.py (1)
41-52: Document where `stdin_data` is set.

The method uses `getattr(self, "stdin_data", None)` to access an attribute that isn't initialized in `__init__`. While this pattern is safe, it would improve maintainability to either:

- Initialize `self.stdin_data = None` in `__init__` (line 39), or
- Add a comment explaining where this attribute is dynamically set

🔎 Proposed fix to initialize in `__init__`

```diff
     def __init__(self, verbose: bool = False):
         self.spinner_chars = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
         self.spinner_idx = 0
         self.prefs_manager = None  # Lazy initialization
         self.verbose = verbose
         self.offline = False
+        self.stdin_data = None  # Set by caller when piping input
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- tests/test_nl_parser.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_nl_parser.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)

cortex/cli.py (2)
cortex/first_run_wizard.py (1)
- `_print_error` (746-748)

cortex/llm/interpreter.py (1)
- `extract_intent` (486-497)
🪛 GitHub Actions: CI
cortex/cli.py
[error] 1-1: Cortex CLI install flow is not triggering the interpreter parse step. Tests expect 'CommandInterpreter.parse' to be called with 'install docker', but it was not called. This causes downstream assertions (e.g., parse being called once) to fail.
[error] 1-1: Ambiguous request handling appears to short-circuit parsing (logs show 'Ambiguous' and 'Please clarify what you want to install.') instead of invoking the parser. This leads to the CLI not producing the expected parse/execute calls in install tests.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (4)
cortex/cli.py (4)
370-374: Good defensive handling for confidence parsing.

The try/except block correctly handles non-numeric confidence values from the LLM, addressing the previous review concern. The fallback to `0.0` is appropriate.
413-426: Good install mode differentiation.

The conditional prompt building for Python vs. system installs is well-structured. The explicit guidance for Python mode ("Use pip and Python virtual environments. Do NOT use sudo or system package managers.") aligns with the learnings that require no silent sudo execution and safe defaults.
448-490: Excellent user confirmation flow.

The interactive confirmation with yes/edit/no options provides transparency and safety before execution. The ability to edit commands line-by-line is a great UX touch. This implementation aligns with the learnings that require explicit user confirmation and no silent sudo execution.
894-894: LGTM!

The help table entry for the new `notify` command is correctly added and follows the existing pattern.
Force-pushed from 04270bc to 9df4dd0
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/cli.py (1)
121-121: Add type hints to the method signature.

The method is missing type hints for the `args` parameter and return type, which violates the coding guidelines requiring type hints in Python code.

🔎 Proposed fix

```diff
-    def notify(self, args):
+    def notify(self, args: argparse.Namespace) -> int:
         """Handle notification commands"""
```
♻️ Duplicate comments (1)
cortex/cli.py (1)
395-405: Critical: Early returns still prevent command generation and preview.

This is a continuation of the previously flagged critical issue. The early returns for ambiguous (lines 395-399) and low-confidence (lines 401-405) requests still prevent `interpreter.parse()` (line 429) from being called, which:

- Breaks preview mode: Users cannot see what commands would be generated for low-confidence requests, defeating the "safe-by-default preview" behavior.
- Fails the transparency goal: The PR objectives require "show AI reasoning to the user," but users never see the planned commands.
- May cause test failures: Tests expecting `parse()` to be called will fail when requests are deemed ambiguous or low-confidence.

The logic should allow command generation to proceed even for ambiguous/low-confidence requests, but prevent execution unless confidence is sufficient. This maintains transparency while staying safe-by-default.

🔎 Proposed fix

Move these checks after `interpreter.parse()` at line 429:

```diff
     print(f"• Confidence : {confidence}")

-    # Handle ambiguous intent
-    if ambiguous and domain == "unknown":
-        print("\n❓ Your request is ambiguous.")
-        print("Please clarify what you want to install.")
-        return 0
-
-    # Handle low confidence
-    if confidence < 0.4 and execute:
-        print("\n🤔 I'm not confident I understood your request.")
-        print("Please rephrase with more details.")
-        return 1
-
     print()  # spacing
     # -------------------------------------------
@@ -425,6 +415,18 @@
     prompt = self._build_prompt_with_stdin(base_prompt)
     # ---------------------------------------------------
     commands = interpreter.parse(prompt)
+
+    # Handle ambiguous intent (after showing commands)
+    if ambiguous and domain == "unknown" and not execute:
+        print("\n❓ Your request appears ambiguous.")
+        print("Review the commands above and use --execute if they look correct.")
+        return 0
+
+    # Handle low confidence (after showing commands)
+    if confidence < 0.4 and not execute:
+        print("\n🤔 Low confidence in understanding.")
+        print("Review the commands above carefully before using --execute.")
+        return 0

     if not commands:
```

Based on learnings, dry-run should be the default for all installations to ensure safe command execution.
🧹 Nitpick comments (2)
cortex/cli.py (2)
41-52: Consider expanding the docstring for clarity.

The docstring is minimal. Consider documenting when `stdin_data` is populated (e.g., when data is piped to the CLI) and how this affects prompt construction.

🔎 Proposed enhancement

```diff
     def _build_prompt_with_stdin(self, user_prompt: str) -> str:
         """
         Combine optional stdin context with user prompt.
+
+        If stdin data was captured (e.g., via piping), prepends it as context
+        to the user's instruction. Otherwise, returns the prompt unchanged.
+
+        Args:
+            user_prompt: The user's installation request or command
+
+        Returns:
+            Combined prompt with stdin context, or original prompt if no stdin
         """
```
451-493: Excellent: the explicit confirmation flow aligns with safety requirements.

The interactive confirmation workflow properly implements the "no silent sudo execution" requirement from the learnings. The edit capability is a valuable feature that gives users fine-grained control before execution.

One optional enhancement: consider adding signal handling or a timeout for the `input()` calls to prevent indefinite blocking in automated or CI environments.

🔎 Optional enhancement for automation-friendly behavior

Consider detecting non-interactive environments (e.g., checking `sys.stdin.isatty()`) and falling back to a default behavior or raising a clear error:

```python
if execute:
    if not sys.stdin.isatty():
        self._print_error("Interactive confirmation required. Run in a terminal or use --dry-run.")
        return 1
    print("\nDo you want to proceed with these commands?")
    # ... rest of confirmation flow
```

Based on learnings, require explicit user confirmation for all package operations.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/cli.py
- tests/test_nl_parser.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_nl_parser.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (1)
cortex/cli.py (1)
416-427: LGTM: Clean prompt construction with stdin support.

The conditional prompt building based on `install_mode` is well-structured, and the integration with `_build_prompt_with_stdin` properly combines stdin context when available. The explicit guidance for Python environments (avoiding sudo/system packages) aligns with best practices.
Actionable comments posted: 2
♻️ Duplicate comments (3)
cortex/llm/interpreter.py (2)
233-241: JSON syntax error in the intent prompt template still present.

Line 237 is missing a colon: `"install_mode" "..."` should be `"install_mode": "..."`. The previous review flagged this issue and it was marked as addressed, but the fix appears incomplete.

🔎 Proposed fix

```diff
 Format:
 {
   "action": "...",
   "domain": "...",
-  "install_mode" "..."
+  "install_mode": "...",
   "description": "...",
   "ambiguous": true/false,
   "confidence": 0.0
 }
```
261-273: Missing error handling for the API call and JSON parsing.

Unlike `_call_openai`, this method lacks try/except. Both the API call and `json.loads` can raise exceptions. The previous review flagged this and it was marked as addressed, but the fix wasn't applied.

🔎 Proposed fix

```diff
     def _extract_intent_openai(self, user_input: str) -> dict:
-        response = self.client.chat.completions.create(
-            model=self.model,
-            messages=[
-                {"role": "system", "content": self._get_intent_prompt()},
-                {"role": "user", "content": user_input},
-            ],
-            temperature=0.2,
-            max_tokens=300,
-        )
-
-        content = response.choices[0].message.content.strip()
-        return json.loads(content)
+        try:
+            response = self.client.chat.completions.create(
+                model=self.model,
+                messages=[
+                    {"role": "system", "content": self._get_intent_prompt()},
+                    {"role": "user", "content": user_input},
+                ],
+                temperature=0.2,
+                max_tokens=300,
+            )
+            content = response.choices[0].message.content.strip()
+            return self._parse_intent_from_text(content)
+        except Exception:
+            return {
+                "action": "unknown",
+                "domain": "unknown",
+                "install_mode": "system",
+                "description": "Failed to extract intent from OpenAI",
+                "ambiguous": True,
+                "confidence": 0.0,
+            }
```

cortex/cli.py (1)
371-374: Remove the duplicate API key check.

This API key validation duplicates lines 341-344. The previous review flagged this redundancy but it wasn't removed.

🔎 Proposed fix

```diff
         self._print_status("🧠", "Understanding request...")

-        api_key = self._get_api_key()
-        if not api_key:
-            self._print_error("No API key configured")
-            return 1
-
         interpreter = CommandInterpreter(
```
🧹 Nitpick comments (2)
cortex/llm/interpreter.py (1)
559-570: Consider handling the FAKE provider for testing.

The `FAKE` provider is defined in `APIProvider` but not handled in `extract_intent`. This will raise `ValueError` for the fake provider, which may break tests. Consider adding a fake intent response similar to `_call_fake`.

🔎 Proposed fix

```diff
     def extract_intent(self, user_input: str) -> dict:
         if not user_input or not user_input.strip():
             raise ValueError("User input cannot be empty")

         if self.provider == APIProvider.OPENAI:
             return self._extract_intent_openai(user_input)
         elif self.provider == APIProvider.CLAUDE:
             raise NotImplementedError("Intent extraction not yet implemented for Claude")
         elif self.provider == APIProvider.OLLAMA:
             return self._extract_intent_ollama(user_input)
+        elif self.provider == APIProvider.FAKE:
+            # Return a predictable intent for testing
+            return {
+                "action": "install",
+                "domain": "testing",
+                "install_mode": "system",
+                "description": "Fake intent for testing",
+                "ambiguous": False,
+                "confidence": 1.0,
+            }
         else:
             raise ValueError(f"Unsupported provider: {self.provider}")
```

cortex/cli.py (1)
414-424: Early returns may prevent command preview. When ambiguous and domain == "unknown" (line 415) or confidence < 0.4 and execute (line 421), the method returns before reaching interpreter.parse() at line 448. This means users cannot see what commands would be generated for ambiguous requests, reducing transparency.
Consider moving these checks after command generation to show the preview while still blocking execution.
🔎 Proposed fix
Move checks after
commands = interpreter.parse(prompt):

 commands = interpreter.parse(prompt)
+
+# Handle ambiguous intent (after showing commands)
+if ambiguous and domain == "unknown":
+    print("\n❓ Your request appears ambiguous.")
+    if not execute:
+        print("Review the commands above and use --execute if they look correct.")
+    return 0
+
+# Handle low confidence
+if confidence < 0.4 and execute:
+    print("\n🤔 Low confidence. Review commands above carefully.")
+    return 1

 if not commands:

And remove the earlier checks at lines 414-424.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py, cortex/llm/interpreter.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🔇 Additional comments (7)
cortex/llm/interpreter.py (3)
147-161: LGTM! The updated system prompt with clear formatting, rules, and examples improves LLM response consistency.

275-303: LGTM! Robust JSON extraction with proper validation of required fields and a safe fallback for unparseable responses.

377-408: LGTM! Improved command filtering ensures only valid, non-empty string commands are returned.
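The "extract JSON, validate required fields, fall back safely" pattern approved above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's exact code: the function name, the regex, and the fallback keys are assumptions based on the review's description.

```python
import json
import re


def parse_intent_from_text(content: str) -> dict:
    """Pull the first JSON object out of an LLM reply, with a safe fallback."""
    fallback = {
        "action": "unknown",
        "domain": "unknown",
        "install_mode": "system",
        "description": "Unstructured intent output",
        "ambiguous": True,
        "confidence": 0.0,
    }
    match = re.search(r"\{.*\}", content, re.DOTALL)
    if not match:
        return fallback
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return fallback
    # Require the fields downstream code depends on
    if not {"action", "domain", "install_mode"} <= data.keys():
        return fallback
    return data
```

The key property is that every failure path returns the same fully-populated fallback dict, so downstream `intent.get(...)` calls never hit a missing key.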
cortex/cli.py (4)
35-46: LGTM! Clean implementation with safe getattr usage for optional stdin context.
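A minimal sketch of what such a stdin-aware prompt helper might look like. The attribute name `stdin_data` and the exact wording of the combined prompt are assumptions based on the review's description, not the PR's actual code:

```python
def build_prompt_with_stdin(obj, user_prompt: str) -> str:
    """Prepend piped stdin context to the user prompt when present."""
    stdin_data = getattr(obj, "stdin_data", None)  # safe when attribute is absent
    if not stdin_data:
        return user_prompt
    return f"Context from stdin:\n{stdin_data.strip()}\n\nRequest: {user_prompt}"
```

Because `getattr` supplies a default, callers that never set `stdin_data` fall through to the plain prompt with no AttributeError.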
379-406: LGTM! Robust intent normalization with defensive type checking and the confidence parsing fix from the previous review. The domain-based ambiguity policy is clearly documented.

435-446: LGTM! Python install mode correctly avoids sudo and uses pip with virtual environments, aligning with the coding guideline for no silent sudo execution.

470-512: LGTM! Excellent implementation of the confirmation flow. The preview-only default and explicit execution confirmation align well with the coding guidelines for dry-run by default and no silent sudo execution. Based on learnings, this satisfies the safety requirements.
@pavanimanchala53 tests are failing — please address the coderabbitai comments and resolve the conflicts as well.
CLA Verification Failed
The following contributors have not signed the Contributor License Agreement:
How to Sign
Verified Signers
This check runs automatically. Maintainers can update
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/llm/interpreter.py (1)
404-422: Dead code: object-array handling is unreachable. Line 404's condition isinstance(commands, list) will always be true when reaching this point (since data.get("commands", []) returns a list). The early return at line 405 means lines 407-422 (handling the [{"command": "..."}] format) are never executed.
🔎 Proposed fix
Merge the logic into the list comprehension:
 commands = data.get("commands", [])
 if isinstance(commands, list):
-    return [c for c in commands if isinstance(c, str) and c.strip()]
-
-# Handle both formats:
-# 1. ["cmd1", "cmd2"] - direct string array
-# 2. [{"command": "cmd1"}, {"command": "cmd2"}] - object array
-result = []
-for cmd in commands:
-    if isinstance(cmd, str):
-        # Direct string
-        if cmd:
-            result.append(cmd)
-    elif isinstance(cmd, dict):
-        # Object with "command" key
-        cmd_str = cmd.get("command", "")
-        if cmd_str:
-            result.append(cmd_str)
-
-return result
+    result = []
+    for cmd in commands:
+        if isinstance(cmd, str) and cmd.strip():
+            result.append(cmd)
+        elif isinstance(cmd, dict):
+            cmd_str = cmd.get("command", "")
+            if cmd_str:
+                result.append(cmd_str)
+    return result
+return []
♻️ Duplicate comments (4)
cortex/llm/interpreter.py (4)
519-551: Consider removing or using _estimate_confidence. This method is defined but not called anywhere. The extract_intent methods return LLM-provided confidence instead. Either remove this dead code or use it as a fallback when LLM confidence parsing fails.
179-183: Critical: self.ollama_url is undefined — will cause AttributeError.
self.ollama_url is used at line 180 but never assigned as an instance attribute. In _initialize_client, ollama_base_url is only a local variable (line 113). This will crash when _extract_intent_ollama is called.
🔎 Proposed fix
Store the base URL in
_initialize_client:

 elif self.provider == APIProvider.OLLAMA:
     # Ollama uses OpenAI-compatible API
     try:
         from openai import OpenAI

         ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
+        self.ollama_url = ollama_base_url
         self.client = OpenAI(
             api_key="ollama",
             base_url=f"{ollama_base_url}/v1"  # Dummy key, not used
         )
230-238: JSON syntax error persists in intent prompt template. Line 234 has "install_mode" "..." — missing the colon between key and value. This malformed example may cause the LLM to produce invalid JSON responses.
🔎 Proposed fix
 Format:
 {
     "action": "...",
     "domain": "...",
-    "install_mode" "..."
+    "install_mode": "...",
     "description": "...",
     "ambiguous": true/false,
-    "confidence": 0.0
+    "confidence": 0.0-1.0
 }
258-270: Add error handling for API call and JSON parsing. Unlike _call_openai, this method lacks try/except handling. Both the API call (line 259) and json.loads (line 270) can raise exceptions that will propagate unhandled, crashing the CLI.
🔎 Proposed fix
 def _extract_intent_openai(self, user_input: str) -> dict:
+    try:
         response = self.client.chat.completions.create(
             model=self.model,
             messages=[
                 {"role": "system", "content": self._get_intent_prompt()},
                 {"role": "user", "content": user_input},
             ],
             temperature=0.2,
             max_tokens=300,
         )
         content = response.choices[0].message.content.strip()
-        return json.loads(content)
+        return self._parse_intent_from_text(content)
+    except Exception:
+        return {
+            "action": "unknown",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": "Failed to extract intent",
+            "ambiguous": True,
+            "confidence": 0.0,
+        }
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, cortex/llm/interpreter.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py, cortex/llm/interpreter.py
🧠 Learnings (3)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🪛 GitHub Actions: CI
cortex/cli.py
[error] 604-604: ruff check failed with F821: Undefined name 'install_mode' in cortex/cli.py:604.
🪛 GitHub Check: lint
cortex/cli.py
[failure] 604-604: Ruff (F821)
cortex/cli.py:604:16: F821 Undefined name install_mode
🪛 GitHub Check: Lint
cortex/cli.py
[failure] 604-604: Ruff (F821)
cortex/cli.py:604:16: F821 Undefined name install_mode
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (4)
cortex/cli.py (3)
40-51: LGTM — helper correctly combines stdin context with prompts. The implementation safely checks for the optional stdin_data attribute and formats the combined prompt appropriately.
119-186: Notify command implementation looks solid. Good defensive handling for missing subcommands (line 123), proper time format validation (lines 163-168), and clear user feedback. The comment at lines 145-146 acknowledges the use of the private _save_config() method — consider exposing a public save() method on NotificationManager in a follow-up.
565-568: The API key check at lines 565-568 is necessary and not redundant. This check is in the install method and includes an error message. A separate check exists at line 524 in the ask method, but these serve different contexts and both are appropriate — each public method should validate API key availability before LLM operations. No changes needed.
Likely an incorrect or invalid review comment.
cortex/llm/interpreter.py (1)
553-564: LGTM with note on FAKE provider. The routing logic is correct. The FAKE provider falls through to ValueError, which is acceptable for a test-only provider, but consider adding an explicit case for clarity or returning a mock intent for testing purposes.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
cortex/cli.py (3)
1-1: Critical: Fix Black formatting to pass CI. The CI pipeline reports that this file needs Black formatting. Run black cortex/cli.py to resolve.

#!/bin/bash
# Verify Black formatting issues
black --check cortex/cli.py
121-187: Add docstring and type hints. This public method lacks a docstring and type hints, which are required per coding guidelines.
🔎 Proposed enhancement
-    def notify(self, args):
+    def notify(self, args: argparse.Namespace) -> int:
         """Handle notification commands"""
+        """
+        Manage desktop notifications (config, enable, disable, DND, send).
+
+        Args:
+            args: Parsed command-line arguments containing notify_action and related options
+
+        Returns:
+            Exit code (0 for success, 1 for error)
+        """
554-560: Add return type hint and comprehensive docstring. This public method lacks a return type hint and a comprehensive docstring explaining parameters, return value, and behavior.
🔎 Proposed enhancement
 def install(
     self,
     software: str,
     execute: bool = False,
     dry_run: bool = False,
     parallel: bool = False,
-):
+) -> int:
+    """
+    Install software using natural language commands.
+
+    Args:
+        software: Natural language description of what to install
+        execute: If True, execute commands after confirmation (default: False for preview-only)
+        dry_run: If True, show commands without executing (default: False)
+        parallel: If True, enable parallel execution for multi-step installs (default: False)
+
+    Returns:
+        Exit code (0 for success, 1 for error)
+    """
♻️ Duplicate comments (1)
cortex/cli.py (1)
641-688: Add EOFError handling for input() calls. The interactive confirmation flow lacks exception handling for EOFError and KeyboardInterrupt, which can occur when stdin is closed or the user presses Ctrl+D/Ctrl+C. While the non-interactive check at line 644 provides a safety net, it doesn't protect against EOF occurring during interaction.
🔎 Proposed fix
if execute: if not _is_interactive(): # Non-interactive mode (pytest / CI) → auto-approve choice = "y" else: - print("\nDo you want to proceed with these commands?") - print(" [y] Yes, execute") - print(" [e] Edit commands") - print(" [n] No, cancel") - choice = input("Enter choice [y/e/n]: ").strip().lower() + try: + print("\nDo you want to proceed with these commands?") + print(" [y] Yes, execute") + print(" [e] Edit commands") + print(" [n] No, cancel") + choice = input("Enter choice [y/e/n]: ").strip().lower() + except (EOFError, KeyboardInterrupt): + print("\n❌ Installation cancelled.") + return 0 if choice == "n": print("❌ Installation cancelled by user.") return 0 elif choice == "e": if not _is_interactive(): self._print_error("Cannot edit commands in non-interactive mode") return 1 - edited_commands = [] - while True: - line = input("> ").strip() - if not line: - break - edited_commands.append(line) + try: + edited_commands = [] + while True: + line = input("> ").strip() + if not line: + break + edited_commands.append(line) + except (EOFError, KeyboardInterrupt): + print("\n❌ Editing cancelled.") + return 0 if not edited_commands: print("❌ No commands provided. Cancelling.") return 1 commands = edited_commands print("\n✅ Updated commands:") for i, cmd in enumerate(commands, 1): print(f" {i}. {cmd}") - confirm = input("\nExecute edited commands? [y/n]: ").strip().lower() - if confirm != "y": - print("❌ Installation cancelled.") - return 0 + try: + confirm = input("\nExecute edited commands? [y/n]: ").strip().lower() + if confirm != "y": + print("❌ Installation cancelled.") + return 0 + except (EOFError, KeyboardInterrupt): + print("\n❌ Installation cancelled.") + return 0
🧹 Nitpick comments (2)
cortex/cli.py (2)
33-34: Add type hint and docstring. The function lacks a return type hint and docstring. Per coding guidelines, type hints are required, and a brief docstring would clarify the purpose.
🔎 Proposed enhancement
-def _is_interactive():
+def _is_interactive() -> bool:
+    """Check if stdin is connected to an interactive terminal."""
     return sys.stdin.isatty()
41-52: Add type hints for method parameters and return value.The method is missing type annotations. Per coding guidelines, type hints are required.
🔎 Proposed enhancement
-    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
+    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
         """
         Combine optional stdin context with user prompt.
         """

Wait, I see the type hints are already present in the annotated code at line 41. Let me re-check the code...
Actually looking at the code again, line 41 shows:
def _build_prompt_with_stdin(self, user_prompt: str) -> str:

The type hints ARE present. So this is actually fine. Let me mark this as approved.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (2)
cortex/llm/interpreter.py (2)
extract_intent (553-564), parse (449-507)
cortex/packages.py (1)
parse(383-427)
🪛 GitHub Actions: CI
cortex/cli.py
[error] 1-1: Black formatting check failed. 1 file would be reformatted. Run 'black .' to format.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (6)
cortex/cli.py (6)
41-52: LGTM! The method has proper type hints and a clear docstring. The logic correctly prepends stdin context when available.

566-569: LGTM! The early API key validation is appropriate here and provides fail-fast behavior.

597-598: LGTM! Intent extraction is called once with appropriate error handling downstream. The install_mode has a safe default value.

606-617: LGTM! The install-mode-aware prompt construction correctly tailors guidance for Python vs. system installations, aligning with the PR's safety-by-default objectives.

619-619: LGTM! Command parsing correctly uses the enhanced prompt with stdin context.

689-696: LGTM! The fake provider handling correctly prevents execution while still allowing command generation for testing purposes.
86986f2 to
067c3ca
Compare
Actionable comments posted: 1
🧹 Nitpick comments (7)
cortex/cli.py (7)
34-35: Add type hint and docstring for module-level function. As per coding guidelines, all functions should have type hints. Consider adding a return type annotation and a brief docstring for clarity.
🔎 Proposed improvement
-def _is_interactive():
+def _is_interactive() -> bool:
+    """Check if the current session is interactive (has a TTY)."""
     return sys.stdin.isatty()
124-124: Add type hints and docstring for public method. As per coding guidelines, public methods require type hints and docstrings.
🔎 Proposed improvement
-    def notify(self, args):
+    def notify(self, args: argparse.Namespace) -> int:
+        """Handle notification commands (config/enable/disable/dnd/send).
+
+        Args:
+            args: Parsed command-line arguments with notify_action subcommand
+
+        Returns:
+            Exit code (0 for success, 1 for error)
+        """
         """Handle notification commands"""
148-151: Use public API instead of private method. The code calls mgr._save_config(), a private method. The comment at lines 149-150 acknowledges this should use a public method. Consider adding a public save_config() or save() method to NotificationManager to avoid directly calling private methods.
630-639: Consider consistent output formatting for fake provider. The fake provider short-circuit uses self._print_status() for the first message but plain print() for the commands list. For consistency with the rest of the CLI, consider using console.print() or cx_print() throughout, unless the plain output is intentional for testing purposes.
🔎 Proposed improvement
 if provider == "fake":
     self._print_status("🧪", "Fake provider detected - skipping execution")
-    print("\nGenerated commands:")
+    console.print("\n[bold]Generated commands:[/bold]")
     for i, cmd in enumerate(commands, 1):
-        print(f" {i}. {cmd}")
+        console.print(f" {i}. {cmd}")
     return 0
655-702: Add exception handling for input() edge cases. While _is_interactive() guards the initial prompt, the input() calls at lines 666, 679, and 694 can still raise EOFError or KeyboardInterrupt if stdin is closed mid-session (e.g., pipe closes, SSH disconnect). Adding try-except blocks would improve robustness for edge cases.
🔎 Proposed improvement
if not _is_interactive(): # Non-interactive mode (pytest / CI) → auto-approve choice = "y" else: - print("\nDo you want to proceed with these commands?") - print(" [y] Yes, execute") - print(" [e] Edit commands") - print(" [n] No, cancel") - choice = input("Enter choice [y/e/n]: ").strip().lower() + try: + print("\nDo you want to proceed with these commands?") + print(" [y] Yes, execute") + print(" [e] Edit commands") + print(" [n] No, cancel") + choice = input("Enter choice [y/e/n]: ").strip().lower() + except (EOFError, KeyboardInterrupt): + print("\n❌ Installation cancelled.") + return 0Apply similar handling to the edit loop (line 679) and edited-commands confirmation (line 694).
677-692: Improve edit mode UX and consider command validation. The edit loop lacks clear instructions for users. Consider:
- Adding a prompt explaining how to exit (e.g., "Enter commands line by line. Press Enter on an empty line to finish:")
- Validating edited commands through the interpreter's safety checks before execution
🔎 Proposed improvement
if not _is_interactive(): self._print_error("Cannot edit commands in non-interactive mode") return 1 + print("\nEnter commands line by line. Press Enter on an empty line to finish:") edited_commands = [] while True: line = input("> ").strip()For validation, consider calling
interpreter._validate_commands(edited_commands) before asking for final confirmation.
557-878: Consider refactoring the install() method for better maintainability. The install() method is over 300 lines long and handles multiple responsibilities (validation, intent extraction, command generation, confirmation, execution, history tracking). Consider extracting logical units into smaller private methods (e.g., _handle_install_confirmation(), _execute_install_commands(), _handle_fake_provider()) to improve readability and testability.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/cli.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/cli.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (2)
cortex/llm/interpreter.py (2)
extract_intent (553-564), parse (449-507)
cortex/packages.py (1)
parse(383-427)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
067c3ca to
91ce6bb
Compare
Actionable comments posted: 1
🧹 Nitpick comments (2)
cortex/sandbox/sandbox_executor.py (1)
516-516: Document the provider attribute. The provider attribute is retrieved using getattr but is never initialized in __init__. Consider documenting when and how this attribute should be set, or adding it as an optional parameter to __init__ if it's intended to be part of the public API.
cortex/cli.py (1)
34-35: Optional: Add type hint and docstring. The function is correct but missing a return type hint (-> bool) and docstring as required by the coding guidelines (PEP 8, type hints required, docstrings required for all public APIs).
🔎 Proposed improvement
-def _is_interactive():
+def _is_interactive() -> bool:
+    """Check if stdin is connected to a terminal (TTY)."""
     return sys.stdin.isatty()
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cortex/cli.py, cortex/sandbox/sandbox_executor.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/sandbox/sandbox_executor.pycortex/cli.py
**/*sandbox*.py
📄 CodeRabbit inference engine (AGENTS.md)
Implement Firejail sandboxing for package operations
Files:
cortex/sandbox/sandbox_executor.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (1)
cortex/cli.py (4)
cortex/first_run_wizard.py (1)
_print_error (746-748)
cortex/llm/interpreter.py (2)
extract_intent (553-564), parse (449-507)
cortex/packages.py (1)
parse (383-427)
cortex/sandbox/sandbox_executor.py (1)
execute (501-645)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (4)
cortex/cli.py (4)
600-621: LGTM - Clean install strategy selection. The intent extraction and prompt building logic correctly implements the install strategy selection described in the PR objectives. The explicit guidance for Python mode (pip + venv, no sudo) aligns with best practices and the learnings about avoiding silent sudo execution.
630-639: LGTM - Good testing pattern. The FAKE provider short-circuit is well-placed after command generation but before user confirmation, ensuring test environments don't block on interactive prompts. The explicit comment documents the positioning requirement clearly.
655-702: LGTM - Robust confirmation flow implementing safe-by-default execution. The confirmation flow correctly implements the PR objectives:
- Preview-only by default (requires --execute)
- Interactive confirmation with clear options (yes/edit/no)
- Auto-approval in non-interactive mode only when --execute is explicit
- Edit capability allows command modification before execution
The _is_interactive() checks prevent blocking on CI/piped input while maintaining explicit confirmation in interactive sessions, consistent with the learnings about avoiding silent sudo execution.
124-190: LGTM - Well-structured notify command implementation. The notify command implementation is clean with proper validation:
- Graceful handling of missing subcommand (lines 127-129)
- Time format validation for DND window (lines 167-172)
- Consistent error handling and user feedback
- Appropriate return codes throughout
The use of _save_config() is acknowledged in the code comment; ideally this would be a public method, but the current approach is functional.
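The DND time-format validation praised above could be as simple as the following hypothetical check. The review only says HH:MM strings are validated; the regex and function name here are assumptions for illustration:

```python
import re


def is_valid_hhmm(value: str) -> bool:
    """Accept 24-hour HH:MM strings such as '22:00' or '07:30'."""
    return bool(re.fullmatch(r"([01]\d|2[0-3]):[0-5]\d", value))
```

`re.fullmatch` rejects partial matches, so trailing garbage like "22:00x" fails along with out-of-range hours and minutes.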
6f9af4c to
a077404
Compare
- Remove broken third-party CLA Assistant action
- Add custom Python-based CLA check script
- Add cla-signers.json for managing signers
- Add CLA signature issue template
- Update Contributing.md with new signing process

The custom solution:
- Checks all commit authors including co-authors
- Supports individual and corporate signers
- Supports email domain matching for corporations
- Posts clear GitHub comments with next steps
- Sets commit status for branch protection
Implements `cortex import` command that parses and installs dependencies from requirements.txt, package.json, Gemfile, Cargo.toml, and go.mod files.

Features:
- Dry-run by default, --execute flag to install
- --dev flag to include dev dependencies
- --all flag to scan directory for all dependency files
- Y/n confirmation for --all --execute
- 90% test coverage with 80 unit tests

Closes cortexlinux#126
…rove error messaging
d566590 to
634d90e
Compare
Actionable comments posted: 1
♻️ Duplicate comments (7)
cortex/sandbox/docker_sandbox.py (1)
776-780: Fix missing space in cleanup message. Line 779 is missing a space before {name}, so the message renders with the sandbox name fused to the word "sandbox" ("...skipped for sandboxmybox") instead of separated from it.
🔎 Proposed fix
 if self.provider == "fake":
     return SandboxExecutionResult(
         success=True,
-        message=f"Fake provider: cleanup skipped for sandbox{name}",
+        message=f"Fake provider: cleanup skipped for sandbox {name}",
     )
179-183: Critical: self.ollama_url is undefined. Line 180 references self.ollama_url which is never defined as an instance attribute. The _initialize_client method (line 113) only uses a local variable ollama_base_url without storing it on self.
In
_initialize_client (around line 113), store the base URL:

ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
self.ollama_url = ollama_base_url  # Add this line
self.client = OpenAI(
    api_key="ollama",
    base_url=f"{ollama_base_url}/v1"
)
230-238: Fix JSON syntax error in intent prompt template. Line 234 has a malformed JSON example: "install_mode" "..." is missing the colon between key and value. This could confuse the LLM and lead to invalid JSON responses.
🔎 Proposed fix
 Format:
 {
     "action": "...",
     "domain": "...",
-    "install_mode" "..."
+    "install_mode": "...",
     "description": "...",
     "ambiguous": true/false,
     "confidence": 0.0
 }
258-270: Add error handling for OpenAI intent extraction. Line 270's json.loads(content) can raise JSONDecodeError, and the API call itself can raise exceptions. These will propagate unhandled, unlike the error handling in _call_openai.
🔎 Proposed fix
 def _extract_intent_openai(self, user_input: str) -> dict:
-    response = self.client.chat.completions.create(
-        model=self.model,
-        messages=[
-            {"role": "system", "content": self._get_intent_prompt()},
-            {"role": "user", "content": user_input},
-        ],
-        temperature=0.2,
-        max_tokens=300,
-    )
-
-    content = response.choices[0].message.content.strip()
-    return json.loads(content)
+    try:
+        response = self.client.chat.completions.create(
+            model=self.model,
+            messages=[
+                {"role": "system", "content": self._get_intent_prompt()},
+                {"role": "user", "content": user_input},
+            ],
+            temperature=0.2,
+            max_tokens=300,
+        )
+        content = response.choices[0].message.content.strip()
+        return self._parse_intent_from_text(content)
+    except Exception:
+        return {
+            "action": "unknown",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": "Failed to extract intent from OpenAI",
+            "ambiguous": True,
+            "confidence": 0.0,
+        }
272-300: Add missing install_mode key to fallback dict. Line 285 validates that install_mode exists in parsed JSON, but the fallback dict (lines 294-300) doesn't include it. This will cause issues when downstream code accesses intent.get("install_mode").
🔎 Proposed fix
 # If parsing fails, do NOT guess meaning
 return {
     "action": "unknown",
     "domain": "unknown",
+    "install_mode": "system",
     "description": "Unstructured intent output",
     "ambiguous": True,
     "confidence": 0.0,
 }
191-199: Add missing install_mode key to Ollama fallback dict. For consistency with _parse_intent_from_text, the fallback dict at line 193 should include install_mode.
🔎 Proposed fix
 except Exception:
     # True failure → unknown intent
     return {
         "action": "unknown",
         "domain": "unknown",
+        "install_mode": "system",
         "description": "Failed to extract intent",
         "ambiguous": True,
         "confidence": 0.0,
     }

cortex/cli.py (1)
656-702: Add EOFError handling for interactive input calls. The input() calls at lines 666, 679, and 694 can raise EOFError when stdin is closed (e.g., in CI or piped input). The sandbox promote flow (lines 450-458) already handles this pattern correctly with try/except blocks.
🔎 Proposed fix
Wrap the confirmation block:
```diff
 if execute:
-    if not _is_interactive():
-        # Non-interactive mode (pytest / CI) → auto-approve
-        choice = "y"
-    else:
-        print("\nDo you want to proceed with these commands?")
-        print("  [y] Yes, execute")
-        print("  [e] Edit commands")
-        print("  [n] No, cancel")
-        choice = input("Enter choice [y/e/n]: ").strip().lower()
+    try:
+        if not _is_interactive():
+            choice = "y"
+        else:
+            print("\nDo you want to proceed with these commands?")
+            print("  [y] Yes, execute")
+            print("  [e] Edit commands")
+            print("  [n] No, cancel")
+            choice = input("Enter choice [y/e/n]: ").strip().lower()
+    except (EOFError, KeyboardInterrupt):
+        print("\n❌ Installation cancelled.")
+        return 0
```

Apply similar wrapping to the edit loop (lines 677-697) input calls.
🧹 Nitpick comments (1)
test_parallel_llm.py (1)
1-314: Test script looks good overall; consider a pytest migration.

This is a well-structured manual test script for validating parallel LLM functionality. The tests are comprehensive and include proper error handling.
However, consider migrating this to pytest-based tests with fixtures for better CI integration:
- Use pytest fixtures for router initialization
- Use pytest.mark.skipif for API key checks
- Add pytest.mark.asyncio for async tests
- Mock API calls for deterministic CI testing
Example pytest migration pattern:
```python
import pytest


@pytest.fixture
def router():
    return LLMRouter()


@pytest.mark.asyncio
@pytest.mark.skipif(not os.getenv("ANTHROPIC_API_KEY"), reason="No API key")
async def test_async_completion(router):
    response = await router.acomplete(...)
    assert response.content
    assert response.tokens_used > 0
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- `.github/cla-signers.json`
- `cortex/cli.py`
- `cortex/llm/interpreter.py`
- `cortex/sandbox/docker_sandbox.py`
- `cortex/sandbox/sandbox_executor.py`
- `docs/nl_parser.md`
- `docs/stdin.md`
- `test_parallel_llm.py`
- `tests/test_nl_parser.py`
- `tests/test_stdin_support.py`
🚧 Files skipped from review as they are similar to previous changes (4)
- cortex/sandbox/sandbox_executor.py
- docs/stdin.md
- tests/test_stdin_support.py
- docs/nl_parser.md
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
`cortex/sandbox/docker_sandbox.py`, `test_parallel_llm.py`, `cortex/cli.py`, `tests/test_nl_parser.py`, `cortex/llm/interpreter.py`
**/*sandbox*.py
📄 CodeRabbit inference engine (AGENTS.md)
Implement Firejail sandboxing for package operations
Files:
cortex/sandbox/docker_sandbox.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_nl_parser.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
`cortex/cli.py`, `tests/test_nl_parser.py`
🧬 Code graph analysis (3)
cortex/sandbox/docker_sandbox.py (1)
cortex/sandbox/sandbox_executor.py (1)
success(74-76)
test_parallel_llm.py (1)
cortex/llm_router.py (8)
`LLMRouter` (76-846), `TaskType` (32-42), `check_hardware_configs_parallel` (979-1027), `diagnose_errors_parallel` (930-976), `query_multiple_packages` (880-927), `acomplete` (543-609), `complete_batch` (765-846), `set_rate_limit` (534-541)
cortex/cli.py (1)
cortex/first_run_wizard.py (1)
_print_error(746-748)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build Package
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (3)
.github/cla-signers.json (1)
1-70: No changes required; the review comment is based on unverified claims.

The git history shows no evidence that "pavani manchala" or "Sahil Bhatane" were ever entries in the `.github/cla-signers.json` file. Searches using `git log -p -S` found zero matches for either name across the entire repository history. Since these entries do not appear to have existed in the file at any point, the premise of the review comment (that they were removed in this change) cannot be substantiated.

Likely an incorrect or invalid review comment.
tests/test_nl_parser.py (1)
1-189: LGTM: Well-structured deterministic unit tests.

These tests appropriately focus on deterministic logic-level behavior (intent normalization, prompt construction, execution gating, safety checks) rather than non-deterministic LLM calls. This is the correct approach for reliable unit tests in a CI pipeline.
The test coverage aligns well with the PR objectives around preview/execute behavior, safety checks, and domain understanding.
cortex/cli.py (1)
44-53: Good implementation of stdin-aware prompt building.

The use of `getattr(self, "stdin_data", None)` is a safe pattern that handles the case where `stdin_data` may not be initialized, avoiding `AttributeError`.
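The pattern praised here can be shown in isolation. A minimal sketch under stated assumptions: the class body and prompt format below are illustrative stand-ins, only the `stdin_data` attribute name and the `getattr` fallback mirror the PR:

```python
class CortexCLI:
    """Illustrative CLI shell; `stdin_data` may or may not ever be assigned."""

    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
        # getattr with a default never raises AttributeError,
        # even when __init__ never assigned self.stdin_data.
        stdin_data = getattr(self, "stdin_data", None)
        if stdin_data:
            return f"{user_prompt}\n\n--- piped input ---\n{stdin_data}"
        return user_prompt


cli = CortexCLI()
plain = cli._build_prompt_with_stdin("install docker")  # attribute never set

cli.stdin_data = "ERROR: port 8080 already in use"
piped = cli._build_prompt_with_stdin("fix this service")
```

Initializing `self.stdin_data = None` in `__init__` (as suggested elsewhere in this review) makes the attribute explicit, but the `getattr` fallback keeps the method safe either way.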
Anshgrover23
left a comment
@pavanimanchala53 While merging main branch into your PR you have accidently merged some of the other commit files, Kindly remove them from this PR.
```
# Auto-assign reviewers
* @mikejmorgan-ai
* @Suyashd999
* @Anshgrover23
```
why this file changed ?
```
},
{
  "name": "Sahil Bhatane",
  "github_username": "Sahilbhatane",
```
Again, why this one ?
634d90e to b76e35c (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (8)
cortex/llm/interpreter.py (6)
528-560: Consider removing unused `_estimate_confidence` method.

This method is defined but has no usages anywhere in the codebase. The `extract_intent` method uses LLM-provided confidence rather than this heuristic, making the method dead code.

Based on past review comments, this was previously flagged as unused.
179-183: Critical: `self.ollama_url` is undefined and will cause `AttributeError`.

The `_extract_intent_ollama` method references `self.ollama_url` on line 180, but this attribute is never defined. In `_initialize_client`, `ollama_base_url` is a local variable that isn't stored as an instance attribute.

🔎 Proposed fix
Store the base URL in `_initialize_client` (around line 113):

```diff
 elif self.provider == APIProvider.OLLAMA:
     try:
         from openai import OpenAI
         ollama_base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
+        self.ollama_url = ollama_base_url
         self.client = OpenAI(
             api_key="ollama",
             base_url=f"{ollama_base_url}/v1"
         )
```

Based on past review comments, this was previously flagged but remains unaddressed.
191-199: Add `install_mode` to fallback dict for consistency.

The fallback return dict in `_extract_intent_ollama` is missing the `install_mode` key, which is validated as required in `_parse_intent_from_text`. This inconsistency could cause issues downstream.

🔎 Proposed fix
```diff
 except Exception:
     return {
         "action": "unknown",
         "domain": "unknown",
+        "install_mode": "system",
         "description": "Failed to extract intent",
         "ambiguous": True,
         "confidence": 0.0,
     }
```

Based on past review comments, this was previously flagged.
230-238: Fix JSON syntax error in intent prompt template.

Line 234 has a syntax error: `"install_mode" "..."` is missing the colon between key and value. This malformed JSON example could confuse the LLM and produce invalid responses.

🔎 Proposed fix
```diff
 Format:
 {
   "action": "...",
   "domain": "...",
-  "install_mode" "..."
+  "install_mode": "...",
   "description": "...",
   "ambiguous": true/false,
   "confidence": 0.0
 }
```

Based on past review comments, this was previously flagged but remains unaddressed.
258-270: Add error handling for API call and JSON parsing.

Unlike `_call_openai`, this method lacks try/except handling. Both the API call and `json.loads` can raise exceptions that will propagate unhandled.

🔎 Proposed fix
```diff
 def _extract_intent_openai(self, user_input: str) -> dict:
+    try:
         response = self.client.chat.completions.create(
             model=self.model,
             messages=[
                 {"role": "system", "content": self._get_intent_prompt()},
                 {"role": "user", "content": user_input},
             ],
             temperature=0.2,
             max_tokens=300,
         )
         content = response.choices[0].message.content.strip()
-    return json.loads(content)
+        return self._parse_intent_from_text(content)
+    except Exception:
+        return {
+            "action": "unknown",
+            "domain": "unknown",
+            "install_mode": "system",
+            "description": "Failed to extract intent from OpenAI",
+            "ambiguous": True,
+            "confidence": 0.0,
+        }
```

Based on past review comments, this was marked as addressed but the fix doesn't appear in the current code.
293-300: Fallback dict missing `install_mode` key.

Line 285 validates that `install_mode` exists in the parsed JSON, but the fallback dict (lines 294-300) doesn't include it. This will cause `KeyError` or return `None` when downstream code accesses `intent.get("install_mode")`.

🔎 Proposed fix
```diff
 return {
     "action": "unknown",
     "domain": "unknown",
+    "install_mode": "system",
     "description": "Unstructured intent output",
     "ambiguous": True,
     "confidence": 0.0,
 }
```

Based on past review comments, this was previously flagged.
cortex/cli.py (2)
587-597: Fix hardcoded success message for fake provider.

The success message `"docker installed successfully!"` is hardcoded, but the fake provider path handles any software, not just Docker. Use the actual `software` variable.

🔎 Proposed fix
```diff
 if execute:
-    print("\ndocker installed successfully!")
+    print(f"\n{software} installed successfully!")
```

Based on past review comments, this was flagged previously but remains unaddressed.
655-657: Remove duplicate comment line.

There's a duplicate comment `# ---------- User confirmation ----------` on consecutive lines (655-656).

🔎 Proposed fix
```diff
-# ---------- User confirmation ----------
 # ---------- User confirmation ----------
 if execute:
```

Based on past review comments, this was marked as addressed but the duplicate remains.
🧹 Nitpick comments (6)
cortex/cli.py (2)
34-36: Add return type hint for PEP 8 compliance.

The module-level helper function lacks a return type annotation.
🔎 Proposed fix
```diff
-def _is_interactive():
+def _is_interactive() -> bool:
     return sys.stdin.isatty()
```

As per coding guidelines, type hints are required in Python code.
44-53: Add return type hint and consider initializing `stdin_data` in `__init__`.

The method has a docstring but lacks a return type hint. Additionally, `stdin_data` is accessed via `getattr` with a fallback, but it's never initialized in `__init__`. This works but is fragile.

🔎 Proposed fix
Add to `__init__`:

```python
def __init__(self, verbose: bool = False):
    self.spinner_chars = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
    self.spinner_idx = 0
    self.verbose = verbose
    self.stdin_data: str | None = None  # Add this
```

Update the method signature:
```diff
-    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
+    def _build_prompt_with_stdin(self, user_prompt: str) -> str:
```

Based on past review comments, `stdin_data` initialization was previously flagged.

test_parallel_llm.py (3)
29-60: Add type hints for test functions.

Per coding guidelines, type hints are required. The async test functions lack return type annotations.
🔎 Proposed fix
```diff
-async def test_async_completion():
+async def test_async_completion() -> bool:
     """Test basic async completion."""
```

Apply similar changes to other test functions.
As per coding guidelines, type hints are required in Python code.
146-149: Avoid accessing private semaphore attribute.

Accessing `router._rate_limit_semaphore._value` relies on internal implementation details that could change. Consider testing rate limiting behavior through observable outcomes instead.

```diff
-    print(f"  Semaphore value: {router._rate_limit_semaphore._value}")
+    # Rate limiting verified by successful completion of 5 requests with limit of 2
```
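One way to verify rate limiting without touching private attributes is to measure peak concurrency from the outside. A hedged sketch using a stand-in semaphore rather than the real `LLMRouter` internals (which may differ):

```python
import asyncio


async def measure_peak_concurrency(limit: int, n_tasks: int) -> int:
    """Run n_tasks through a semaphore and record the max concurrent count."""
    sem = asyncio.Semaphore(limit)
    active = 0
    peak = 0

    async def worker() -> None:
        nonlocal active, peak
        async with sem:
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0.01)  # stand-in for an API call
            active -= 1

    await asyncio.gather(*(worker() for _ in range(n_tasks)))
    return peak


# With a limit of 2, five tasks never run more than 2 at a time.
peak = asyncio.run(measure_peak_concurrency(limit=2, n_tasks=5))
```

A router test could apply the same idea by wrapping the mocked API call with counters, asserting on observed concurrency instead of `_value`.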
229-234: Sequential test could be misleading due to await overhead.

The sequential simulation uses `await` in a loop, which is correct, but the dict unpacking `**{k: v for k, v in req.items() if k != "task_type"}` is unusual. The `acomplete` method accepts `task_type` directly, so this filtering seems unnecessary.

🔎 Proposed fix
```diff
 for req in requests:
-    await router.acomplete(
-        **{k: v for k, v in req.items() if k != "task_type"}, task_type=req["task_type"]
-    )
+    await router.acomplete(
+        messages=req["messages"],
+        task_type=req["task_type"],
+        max_tokens=req.get("max_tokens", 4096),
+    )
```

cortex/llm/interpreter.py (1)
562-573: Add type hint and handle FAKE provider in `extract_intent`.

The `extract_intent` method lacks a return type hint and doesn't handle the `FAKE` provider, which will raise a `ValueError`. Consider adding support or a more descriptive error.

🔎 Proposed fix
```diff
-    def extract_intent(self, user_input: str) -> dict:
+    def extract_intent(self, user_input: str) -> dict[str, Any]:
         if not user_input or not user_input.strip():
             raise ValueError("User input cannot be empty")
         if self.provider == APIProvider.OPENAI:
             return self._extract_intent_openai(user_input)
         elif self.provider == APIProvider.CLAUDE:
             raise NotImplementedError("Intent extraction not yet implemented for Claude")
         elif self.provider == APIProvider.OLLAMA:
             return self._extract_intent_ollama(user_input)
+        elif self.provider == APIProvider.FAKE:
+            # Return a deterministic intent for testing
+            return {
+                "action": "install",
+                "domain": "unknown",
+                "install_mode": "system",
+                "description": "Fake provider intent",
+                "ambiguous": False,
+                "confidence": 1.0,
+            }
         else:
             raise ValueError(f"Unsupported provider: {self.provider}")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `cortex/cli.py`
- `cortex/llm/interpreter.py`
- `test_parallel_llm.py`
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
`test_parallel_llm.py`, `cortex/cli.py`, `cortex/llm/interpreter.py`
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/cli.py
🧬 Code graph analysis (2)
test_parallel_llm.py (1)
cortex/llm_router.py (8)
`LLMRouter` (76-846), `TaskType` (32-42), `check_hardware_configs_parallel` (979-1027), `diagnose_errors_parallel` (930-976), `query_multiple_packages` (880-927), `acomplete` (543-609), `complete_batch` (765-846), `set_rate_limit` (534-541)
cortex/cli.py (1)
cortex/llm/interpreter.py (3)
`CommandInterpreter` (18-573), `parse` (458-516), `extract_intent` (562-573)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (4)
cortex/cli.py (2)
611-633: LGTM! Intent extraction and prompt building logic is well-structured.

The intent extraction integration with `install_mode` detection and conditional prompt building for Python vs system packages follows the PR objectives for install strategy selection.
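The install-mode branching described here can be sketched in isolation. Everything below is illustrative, assuming a hypothetical `build_install_prompt` helper; only the `install_mode` key and the Python-vs-system split mirror the PR:

```python
def build_install_prompt(software: str, intent: dict) -> str:
    """Build an LLM prompt tailored to the detected install mode."""
    mode = intent.get("install_mode", "system")
    if mode == "python":
        # Python packages should go through pip in a virtual environment,
        # not the distro package manager.
        return (
            f"Generate shell commands to install the Python package "
            f"'{software}' using pip inside a virtual environment."
        )
    # Default: system package via the distro package manager.
    return (
        f"Generate shell commands to install '{software}' "
        f"using the system package manager (apt)."
    )


system_prompt = build_install_prompt("docker", {"install_mode": "system"})
python_prompt = build_install_prompt("requests", {"install_mode": "python"})
```

Defaulting the missing key to `"system"` is also why the fallback-dict fixes elsewhere in this review matter: a well-formed intent keeps this branch deterministic.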
657-702: User confirmation flow handles interactive and non-interactive modes correctly.

The implementation properly:
- Auto-approves in non-interactive mode (CI/pytest)
- Provides y/e/n choice flow in interactive mode
- Guards edit mode against non-interactive context
- Handles edited commands with re-confirmation
However, `input()` calls could still raise `EOFError` in edge cases. Consider wrapping them in try/except for robustness.

🔎 Optional defensive improvement
```diff
 else:
+    try:
         print("\nDo you want to proceed with these commands?")
         print("  [y] Yes, execute")
         print("  [e] Edit commands")
         print("  [n] No, cancel")
         choice = input("Enter choice [y/e/n]: ").strip().lower()
+    except (EOFError, KeyboardInterrupt):
+        print("\n❌ Installation cancelled.")
+        return 0
```

test_parallel_llm.py (2)
1-27: LGTM! Test module setup is appropriate.

The imports and path setup are reasonable for a test script. The module docstring clearly explains the test purposes.
254-314: LGTM! Main orchestration and exit code handling.

The `main()` function properly checks for API keys, runs all tests, and returns an appropriate exit code based on results.
Closing in favour of #411



This PR improves the Natural Language Install flow to make it safe by default,
demo-ready, and stable despite LLM nondeterminism, while keeping semantics fully
LLM-driven.
closes #248
What’s implemented:
Documentation:
- examples, and requirement justification

Tests:
- preview vs execute behavior, safety checks, and k8s understanding
This satisfies all acceptance criteria for the Natural Language Install polish
and edge-case handling issue.
Summary by CodeRabbit
New Features
Documentation
Tests
UX